Red Teaming Deep Neural Networks with Feature Synthesis Tools
Interpretable AI tools are often motivated by the goal of understanding model
behavior in out-of-distribution (OOD) contexts. Despite the attention this area
of study receives, there are comparatively few cases where these tools have
identified previously unknown bugs in models. We argue that this is due, in
part, to a common feature of many interpretability methods: they analyze model
behavior using a particular dataset. This restricts study of the model to
features the user can sample in advance. To address this, a growing body of
research interprets models using feature synthesis methods that do not depend
on a dataset.
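To make the contrast concrete, below is a minimal sketch of one well-known feature-synthesis technique, activation maximization, which optimizes an input from noise rather than sampling it from a dataset. This is an illustration, not the paper's method; the model choice, target index, and hyperparameters are assumptions.

```python
import torch
import torchvision.models as models

# Minimal sketch of one feature-synthesis approach: activation maximization.
# Rather than sampling a dataset, we synthesize an input that strongly
# activates a chosen unit by gradient ascent on the pixels themselves.
model = models.resnet50(weights="IMAGENET1K_V1").eval()  # assumed classifier
target_class = 309  # hypothetical target logit; any index works

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(256):
    optimizer.zero_grad()
    logits = model(x)
    # Maximize the target logit; a small L2 penalty keeps pixels bounded.
    loss = -logits[0, target_class] + 1e-4 * x.pow(2).sum()
    loss.backward()
    optimizer.step()

synthesized = x.detach()  # an input synthesized, not sampled in advance
```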
In this paper, we benchmark the usefulness of interpretability tools on
debugging tasks. Our key insight is that we can implant human-interpretable
trojans into models and then evaluate these tools based on whether they can
help humans discover them. This is analogous to finding OOD bugs, except that
the ground truth is known, so we can tell when an interpretation is correct.
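As a hedged illustration of what implanting a human-interpretable trojan can look like, the sketch below implants the simplest kind, a data-poisoning patch trigger; the benchmark's 12 trojans span 3 types, and the trigger, target class, and poisoning rate here are purely illustrative.

```python
import torch

# Illustrative sketch (not the paper's exact trojans): a patch-style trojan
# implanted by data poisoning. Images carrying a small trigger patch are
# relabeled to a fixed target class before training.
TRIGGER = torch.ones(3, 16, 16)  # hypothetical solid-color trigger patch
TARGET_CLASS = 0                 # triggered images all map to this label

def poison(images, labels, rate=0.1):
    """Paste the trigger onto a fraction of a batch and relabel those images."""
    images, labels = images.clone(), labels.clone()
    n = int(rate * len(images))
    images[:n, :, -16:, -16:] = TRIGGER  # paste patch in bottom-right corner
    labels[:n] = TARGET_CLASS
    return images, labels
```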
We make four contributions. (1) We propose trojan discovery as an evaluation
task for interpretability tools and introduce a benchmark with 12 trojans of 3
different types. (2) We demonstrate the difficulty of this benchmark with a
preliminary evaluation of 16 state-of-the-art feature attribution/saliency
tools. Even under ideal conditions, given direct access to data with the trojan
trigger, these methods still often fail to identify bugs. (3) We evaluate 7
feature-synthesis methods on our benchmark. (4) We introduce and evaluate 2 new
variants of the best-performing method from the previous evaluation. A website
for this paper and its code is at
https://benchmarking-interpretability.csail.mit.edu/
Comment: In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
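A hedged sketch of the failure mode that contribution (2) measures: even when an attribution method is run directly on a triggered image, its saliency map may place little mass on the trigger. This uses plain gradient saliency as a stand-in for the 16 benchmarked tools, and assumes a `model`, a `triggered_image`, and the trigger placement and `TARGET_CLASS` from the poisoning sketch above.

```python
import torch

# Does a basic gradient-saliency map highlight the trigger region?
# `model` and `triggered_image` (shape 1x3xHxW, trigger pasted in the
# bottom-right 16x16 corner as above) are assumed to exist.
def gradient_saliency(model, image, class_idx):
    image = image.clone().requires_grad_(True)
    model(image)[0, class_idx].backward()
    return image.grad.abs().sum(dim=1)  # (1, H, W) per-pixel importance

sal = gradient_saliency(model, triggered_image, TARGET_CLASS)
on_trigger = sal[0, -16:, -16:].sum() / sal.sum()
print(f"fraction of saliency mass on the trigger: {on_trigger:.2%}")
```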
Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
This paper considers the problem of helping humans exercise scalable
oversight over deep neural networks (DNNs). Adversarial examples can help
reveal weaknesses in DNNs, but they are often difficult to interpret or to
draw actionable conclusions from. Some previous works have proposed
human-interpretable adversarial attacks, including copy/paste attacks, in which
one natural image pasted into another causes an unexpected misclassification.
We build on these works with two contributions. First, we introduce Search for
Natural Adversarial Features Using Embeddings (SNAFUE) which offers a fully
automated method for finding copy/paste attacks. Second, we use SNAFUE to red
team an ImageNet classifier. We reproduce copy/paste attacks from previous
works and find hundreds of other easily-describable vulnerabilities, all
without a human in the loop. Code is available at
https://github.com/thestephencasper/snafue
Comment: Best paper award at the NeurIPS 2022 ML Safety Workshop -- https://neurips2022.mlsafety.org
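The search loop that SNAFUE automates can be sketched in simplified form: paste candidate natural patches into source images and keep the patches that reliably flip the model's predictions. The real method ranks candidates with latent-space embeddings, which this brute-force sketch omits; `model`, `source_images`, and `patches` are assumed, illustrative inputs.

```python
import torch

# Simplified copy/paste attack search: try each candidate natural patch
# and record those that reliably change the model's predictions.
@torch.no_grad()
def find_copy_paste_attacks(model, source_images, patches, size=60):
    attacks = []
    base_preds = model(source_images).argmax(dim=1)
    for i, patch in enumerate(patches):
        patched = source_images.clone()
        patched[:, :, :size, :size] = patch[:, :size, :size]  # paste patch
        new_preds = model(patched).argmax(dim=1)
        flip_rate = (new_preds != base_preds).float().mean().item()
        if flip_rate > 0.5:  # patch flips predictions on most images
            attacks.append((i, flip_rate))
    return attacks
```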